Physics-informed Neural Networks (PINN)


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

1. Why Does Deep Learning Need Physics?

Why do data-driven ‘black-box’ methods fail?

  • May output results that are physically inconsistent
  • Prone to learning spurious relationships that look good only on the training and test data
    • This can lead to poor generalization outside the available data (out-of-sample prediction tasks)
  • Lack interpretability
    • Discovering the mechanism of an underlying process is crucial for scientific advancement

Physics-Informed Neural Networks (PINNs)

  • Take full advantage of data science methods together with the accumulated prior knowledge of scientific theories $\rightarrow$ improved predictive performance
  • Integrate domain knowledge to overcome imbalanced data and data shortage

1.1. Taxonomy of Informed Deep Learning



1.2. Multilayer Feedforward Networks are Universal Approximators

The Universal Approximation Theorem

  • Neural Networks are capable of approximating any Borel measurable function
  • Neural Networks (1989)
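A quick numerical illustration of this claim (a sketch, not the theorem's proof): a single hidden tanh layer with randomly drawn hidden weights and a least-squares fit of the output layer already approximates a smooth function well. All names and values here are illustrative choices, not from the original papers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
target = np.cos(np.pi * x)

# One hidden tanh layer with 50 random features; only the linear
# output layer is fitted, by ordinary least squares
W = rng.normal(scale=3.0, size=(1, 50))
b = rng.normal(scale=3.0, size=(1, 50))
H = np.tanh(x @ W + b)                           # hidden activations
c, *_ = np.linalg.lstsq(H, target, rcond=None)   # output weights

err = np.max(np.abs(H @ c - target))             # small approximation error
```

Increasing the number of hidden units drives the error further down, in the spirit of the universal approximation theorem.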



1.3. Neural Networks for Solving Differential Equations

Neural Algorithm for Solving Differential Equations

  • Journal of Computational Physics (1990)
  • Neural minimization for finite difference equations



ANN for ODE and PDE

  • IEEE Transactions on Neural Networks (1998)



1.4. Journal of Computational Physics (2019)






1.5. Nature Reviews Physics (2021)




2. Architecture of Physics-informed Neural Networks (PINN)

NN as a universal function approximator

Given

  • ODE or PDE
$$ \frac{\partial u}{\partial t} - \lambda \frac{\partial^2 u}{\partial x^2} = 0 $$
  • Some measured data from initial and boundary conditions

Adding constraints for regularization

  • Regularized by physics, but matched with sparse data
$$ \begin{align*} \mathcal{L} &= \omega_{\text{data}} \mathcal{L}_{\text{data}} + \omega_{\text{PDE}} \mathcal{L}_{\text{PDE}}\\\\ \mathcal{L}_{\text{data}} &= \frac{1}{N_{\text{data}}} \sum_{i=1}^{N_{\text{data}}} \left(u(x_i,t_i) - u_i \right)^2 \\ \mathcal{L}_{\text{PDE}} &= \frac{1}{N_{\text{PDE}}} \sum_{j=1}^{N_{\text{PDE}}} \left(\frac{\partial u}{\partial t} - \lambda \frac{\partial^2 u}{\partial x^2} \right)^2 \Bigg\rvert_{(x_j,t_j)} \end{align*} $$
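As a minimal sketch of this composite loss for the diffusion equation above (the names `u_net`, `pinn_loss`, and the value of `lam` are illustrative assumptions, not part of any library API):

```python
import tensorflow as tf

lam = 1.0  # assumed diffusivity, for illustration only

# A small network u(x, t); same functional style as used later in this notebook
inputs = tf.keras.layers.Input((2,))
hidden = tf.keras.layers.Dense(16, activation = 'tanh')(inputs)
outputs = tf.keras.layers.Dense(1)(hidden)
u_net = tf.keras.models.Model(inputs, outputs)

def pinn_loss(x_d, t_d, u_d, x_c, t_c, w_data = 1.0, w_pde = 1.0):
    # Data loss: match network output to measurements (x_d, t_d, u_d)
    u_pred = u_net(tf.concat([x_d, t_d], axis = 1))
    loss_data = tf.reduce_mean(tf.square(u_pred - u_d))

    # PDE loss: residual u_t - lam * u_xx at collocation points (x_c, t_c)
    with tf.GradientTape() as g2:
        g2.watch(x_c)
        with tf.GradientTape(persistent = True) as g1:
            g1.watch([x_c, t_c])
            u = u_net(tf.concat([x_c, t_c], axis = 1))
        u_x = g1.gradient(u, x_c)      # first derivative, recorded by g2
    u_t = g1.gradient(u, t_c)
    u_xx = g2.gradient(u_x, x_c)       # second derivative in x
    loss_pde = tf.reduce_mean(tf.square(u_t - lam * u_xx))

    return w_data * loss_data + w_pde * loss_pde
```

Minimizing this scalar with any gradient-based optimizer trains the network to fit the sparse data while satisfying the PDE at the collocation points.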




3. Method for Solving ODE with Neural Networks

3.1. Background

This is a result first due to Lagaris et al. (1998). The idea is to solve differential equations with neural networks: represent the solution by a neural network and train the network to satisfy the conditions required by the differential equation.

Consider a system of ordinary differential equations

$$ {u^\prime} = f(u,t) $$

with $t \in [0, 1]$ and a known initial condition $u(0) = u_0$.

To solve this, we approximate the solution by a neural network:

$$ {\text{NN}(t)} \approx {u(t)} $$


If $\text{NN}(t)$ were the true solution, then $\text{NN}'(t) = f\left(\text{NN}(t),t \right)$ would hold for all $t$. This motivates the loss function:


$${L(\omega)} = \sum_{i} \left(\frac{d \text{NN}(t_i)}{dt}-f\left(\text{NN}(t_i),t_i \right)\right)^2$$

The choice of the $t_i$ can be made in many ways: random sampling, a uniform grid, etc. In any case, when this loss function is minimized (with gradients computed by standard reverse-mode automatic differentiation), we have $\frac{d \text{NN}(t_i)}{dt} \approx f(\text{NN}(t_i),t_i)$, and thus $\text{NN}(t)$ approximately solves the differential equation.

Note that we still have to enforce the initial condition. One simple way to do this is to add an initial-condition term to the loss function:


$${L(\omega)} = \sum_{i} \left(\frac{d \text{NN}(t_i)}{dt}-f(\text{NN}(t_i),t_i)\right)^2 + (\text{NN}(0)-u_0)^2 $$

where $\omega$ denotes the parameters of the neural network $\text{NN}$ that approximates $u$. The problem thus reduces, once again, to simply finding weights that minimize a loss function!
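The construction above can be written down directly (a generic sketch with placeholder names `nn` and `f`; the lab that follows builds a concrete version for one specific ODE):

```python
import numpy as np
import tensorflow as tf

def ode_loss(nn, f, t_points, u0):
    """L(w) = sum_i (NN'(t_i) - f(NN(t_i), t_i))^2 + (NN(0) - u0)^2."""
    t = tf.reshape(tf.convert_to_tensor(t_points, dtype = tf.float32), (-1, 1))
    with tf.GradientTape() as g:
        g.watch(t)
        u = nn(t)
    du_dt = g.gradient(u, t)                 # NN'(t_i) by automatic differentiation
    residual = du_dt - f(u, t)               # ODE residual at each t_i
    ic = nn(tf.zeros((1, 1))) - u0           # initial-condition mismatch NN(0) - u0
    return tf.reduce_sum(tf.square(residual)) + tf.reduce_sum(tf.square(ic))
```

Any gradient-based optimizer applied to this scalar trains `nn` toward an approximate solution of $u' = f(u, t)$, $u(0) = u_0$.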

3.2. Lab 1: Simple Example

  • Let's look at the ODE:


$$\frac{du}{dt} = \cos2\pi t$$

  • Initial condition:


$$u(0) = 1$$

  • The exact solution:


$$u(t) = \frac{1}{2\pi}\sin2\pi t + 1$$
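This exact solution can be checked symbolically (a quick SymPy sanity check, not part of the lab code):

```python
import sympy as sp

t = sp.symbols('t')
u = sp.sin(2*sp.pi*t)/(2*sp.pi) + 1

# du/dt should equal cos(2*pi*t), and u(0) should equal 1
assert sp.simplify(sp.diff(u, t) - sp.cos(2*sp.pi*t)) == 0
assert u.subs(t, 0) == 1
```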

In [2]:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import random
import sys

3.2.0. Setup

  • Using TensorFlow 2
  • Install and Setup
    • TensorFlow >= 2.2.0 and TensorFlow Probability >= 0.10.0
    • If you have TensorFlow 2.2.0, TensorFlow Probability >= 0.10.0 should be installed
      (TensorFlow 2.3.0 matches with TensorFlow Probability 0.11.0 and so on)
In [1]:
# !pip install tensorflow-probability

3.2.1. Define Network and Hyper-parameters

In [3]:
def NNODE():
    inputs = tf.keras.layers.Input((1,))
    
    layer1 = tf.keras.layers.Dense(32, activation = 'tanh')(inputs)
    outputs = tf.keras.layers.Dense(1)(layer1)

    model = tf.keras.models.Model(inputs = inputs, outputs = outputs)
    return model

NN = NNODE()
NN.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 1)]               0         
                                                                 
 dense (Dense)               (None, 32)                64        
                                                                 
 dense_1 (Dense)             (None, 1)                 33        
                                                                 
=================================================================
Total params: 97
Trainable params: 97
Non-trainable params: 0
_________________________________________________________________
In [4]:
optm = tf.keras.optimizers.Adam(learning_rate = 0.001)

3.2.2. Define ODE System

In [5]:
def ode_system(x, net):
    x = tf.reshape(tf.constant(x, dtype = tf.float32), (-1, 1))  # ensure column vector
    x_size = x.shape[0]
    x_0 = tf.zeros((x_size, 1))
    one = tf.ones((x_size, 1))

    with tf.GradientTape() as g:
        g.watch(x)
        y = net(x)                                # use the network passed in
    y_x = g.gradient(y, x)                        # du/dt at the sample points

    ode_loss = y_x - tf.math.cos(2*np.pi*x)       # ODE residual
    bc_loss = net(x_0) - one                      # initial condition u(0) = 1
    loss_square = tf.square(ode_loss) + tf.square(bc_loss)
    total_loss = tf.reduce_mean(loss_square)

    return total_loss

3.2.3. Train & Prediction

In [6]:
n_itr = 5000
n_prt = 500

hist = {}
hist['loss'] = []

np.random.seed(2022)
t_train = (np.random.rand(30) * 2).reshape(-1, 1)
t_test = np.linspace(0, 2, 100)
train_loss_record = []
test_loss_record = []

def train_step():

    with tf.GradientTape() as tape:
        train_loss = ode_system(t_train, NN)
    gradients = tape.gradient(train_loss, NN.trainable_variables)
    optm.apply_gradients(zip(gradients, NN.trainable_variables))

    test_loss = ode_system(t_test, NN)            # evaluated outside the tape
    train_loss_record.append(train_loss)
    test_loss_record.append(test_loss)
    hist['loss'].append(train_loss.numpy())
In [7]:
def train():
    for itr in range(n_itr+1):
        train_step()
        sys.stdout.write('\r Training...({0}/{1})'.format(itr,n_itr))
        if (itr)%n_prt == 0: 
            print(', Loss: {}'.format(hist['loss'][-1]))
In [8]:
train()
 Training...(0/5000), Loss: 1.5551470518112183
 Training...(500/5000), Loss: 0.4967832565307617
 Training...(1000/5000), Loss: 0.45323580503463745
 Training...(1500/5000), Loss: 0.33198651671409607
 Training...(2000/5000), Loss: 0.26978710293769836
 Training...(2500/5000), Loss: 0.15746758878231049
 Training...(3000/5000), Loss: 0.03731522709131241
 Training...(3500/5000), Loss: 0.007777300663292408
 Training...(4000/5000), Loss: 0.0017221815651282668
 Training...(4500/5000), Loss: 0.0011155323591083288
 Training...(5000/5000), Loss: 0.0008534438093192875
In [9]:
plt.figure()
plt.semilogy(train_loss_record, label="Train loss")
plt.semilogy(test_loss_record, label="Test loss")
plt.legend()
plt.show()
In [10]:
t_true = np.linspace(0, 2, 100)
result = NN.predict(t_true).ravel()
g_true = np.sin(2*np.pi*t_true)/(2*np.pi) + 1
g = np.sin(2*np.pi*t_train)/(2*np.pi) + 1

plt.figure()
plt.plot(t_train, g, 'ok', label = 'Train')
plt.plot(t_true, g_true, '-k',label = 'True')
plt.plot(t_true, result, '--r', label = 'Prediction')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.show()

4. Lab 2: Solve Lab 1 Again using DeepXDE

4.1. DeepXDE

  • Using the DeepXDE library
  • DeepXDE is a library for solving forward and inverse partial differential equations (PDEs) via physics-informed neural networks (PINNs)
In [11]:
# !pip install deepxde
  • Change the DeepXDE backend
    • DeepXDE supports backends for TensorFlow 1.x, TensorFlow 2.x, and PyTorch. Here we set it to TensorFlow 2.x
In [12]:
from deepxde.backend.set_default_backend import set_default_backend
set_default_backend("tensorflow")
Using backend: tensorflow

Setting the default backend to "tensorflow". You can change it in the ~/.deepxde/config.json file or export the DDEBACKEND environment variable. Valid options are: tensorflow.compat.v1, tensorflow, pytorch (all lowercase)

4.2. Problem Setup Again

  • ODE


$$\frac{du}{dt} = \cos2\pi t$$

  • Initial condition:


$$u(0) = 1$$

  • The exact solution:


$$u(t) = \frac{1}{2\pi}\sin2\pi t + 1$$

In [13]:
import tensorflow as tf
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt
import math as m

4.2.1. Define ODE System

In [14]:
pi = tf.constant(m.pi)

def ode_system(t, u):
    du_t = dde.grad.jacobian(u, t)
    return du_t - tf.math.cos(2*pi*t)

4.2.2. Define Initial Condition

In [15]:
def boundary(x, on_initial):
    return np.isclose(x[0], 0)

4.2.3. Define Geometry & Implement Initial Condition

In [16]:
geom = dde.geometry.TimeDomain(0, 2)

ic = dde.IC(geom, lambda x: 1, boundary)

# Reference solution to compute the error
def func(t):
    return np.sin(2*np.pi*t)/(2*np.pi) + 1

data = dde.data.PDE(geom, 
                    ode_system, 
                    ic, 
                    num_domain = 28, 
                    num_boundary = 2, 
                    solution = func, 
                    num_test = 100)
/mnt/disk1/seungmin/env_378/lib/python3.7/site-packages/skopt/sampler/sobol.py:250: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+30=30. 
  total_n_samples))
In [17]:
print('Total training points: ', data.train_x_all.shape)
print('Training initial points: ', data.train_x_bc.shape)

plt.plot(data.train_x_all, np.zeros([data.train_x_all.shape[0], 1]), '.')
plt.xlabel('Time (t)')
plt.show()
Total training points:  (30, 1)
Training initial points:  (1, 1)

4.2.4. Define Network and Hyper-parameters

In [18]:
layer_size = [1] + [50] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)
In [19]:
model = dde.Model(data, net)
model.compile("adam", lr = 0.001)
Compiling model...
'compile' took 0.000507 s

4.2.5. Train & Prediction

In [20]:
losshistory, train_state = model.train(epochs = 5000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Training model...

WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7f4e4e446710> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x7f4e4e446710>: no matching AST found among candidates:
# coding=utf-8
(lambda x, on: np.array([on_boundary(x[i], on[i]) for i in range(len(x))]))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
Step      Train loss              Test loss               Test metric
0         [7.55e-01, 1.00e+00]    [7.19e-01, 1.00e+00]    []  
1000      [5.63e-04, 1.94e-06]    [5.11e-04, 1.94e-06]    []  
2000      [1.27e-04, 2.12e-10]    [1.19e-04, 2.12e-10]    []  
3000      [3.86e-05, 2.92e-09]    [4.01e-05, 2.92e-09]    []  
4000      [1.89e-05, 2.05e-12]    [2.49e-05, 2.05e-12]    []  
5000      [3.49e-04, 1.75e-04]    [3.70e-04, 1.75e-04]    []  

Best model at step 4000:
  train loss: 1.89e-05
  test loss: 2.49e-05
  test metric: []

'train' took 18.033717 s

5. Lab 3: Euler Beam (Solid Mechanics)

5.1. Problem Setup

  • We will solve an Euler beam problem:



  • Problem properties
$$E = 1 \operatorname{Pa}, \quad I = 1 \operatorname{m^4}, \quad q = 1 \operatorname{N/m}, \quad l = 1 \operatorname{m}$$


  • Partial differential equations & boundary conditions


$${\partial^4 u \over \partial x^4} + 1 = 0, \qquad \text{where} \quad x \in [0,1]$$

  • One Dirichlet boundary condition on the left boundary:
$$u(0) = 0$$
  • One Neumann boundary condition on the left boundary:


$$u'(0) = 0$$

  • Two boundary conditions on the right boundary:


$$u''(1) = 0, \quad u'''(1) = 0$$

  • The exact solution is


$$u(x) = -{1 \over 24}x^4 + {1 \over 6}x^3 - {1 \over 4}x^2$$
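The exact solution can be verified against the PDE and all four boundary conditions symbolically (a quick SymPy sanity check):

```python
import sympy as sp

x = sp.symbols('x')
u = -x**4/24 + x**3/6 - x**2/4

# PDE residual: u'''' + 1 = 0
assert sp.simplify(sp.diff(u, x, 4) + 1) == 0
# Boundary conditions at x = 0 and x = 1
assert u.subs(x, 0) == 0                       # u(0) = 0
assert sp.diff(u, x).subs(x, 0) == 0           # u'(0) = 0
assert sp.diff(u, x, 2).subs(x, 1) == 0        # u''(1) = 0
assert sp.diff(u, x, 3).subs(x, 1) == 0        # u'''(1) = 0
```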

5.2. PINN

5.2.1. Define PDE System

In [21]:
def ddy(x, y):
    return dde.grad.hessian(y, x)                 # u''

def dddy(x, y):
    return dde.grad.jacobian(ddy(x, y), x)        # u'''

def pde(x, y):
    dy_xx = ddy(x, y)
    dy_xxxx = dde.grad.hessian(dy_xx, x)          # u''''
    return dy_xxxx + 1                            # residual of u'''' + 1 = 0

5.2.2. Define Boundary Condition

In [22]:
def boundary_l(x, on_boundary):
    return on_boundary and np.isclose(x[0], 0)

def boundary_r(x, on_boundary):
    return on_boundary and np.isclose(x[0], 1)

5.2.3. Define Geometry, Implement Boundary Condition

In [23]:
geom = dde.geometry.Interval(0, 1)

bc1 = dde.DirichletBC(geom, lambda x: 0, boundary_l) # u(0) = 0
bc2 = dde.NeumannBC(geom, lambda x: 0, boundary_l) # u'(0) = 0
bc3 = dde.OperatorBC(geom, lambda x, y, _: ddy(x, y), boundary_r) # u''(1) = 0
bc4 = dde.OperatorBC(geom, lambda x, y, _: dddy(x, y), boundary_r) # u'''(1) = 0

# Reference solution to compute the error
def func(x):
    return -(x ** 4) / 24 + x ** 3 / 6 - x ** 2 / 4

data = dde.data.PDE(geom, 
                    pde, 
                    [bc1, bc2, bc3, bc4], 
                    num_domain = 10, 
                    num_boundary = 2, 
                    solution = func, 
                    num_test = 100)
/mnt/disk1/seungmin/env_378/lib/python3.7/site-packages/skopt/sampler/sobol.py:250: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+12=12. 
  total_n_samples))
In [24]:
print('Total training points: ', data.train_x_all.shape)
print('Training boundary points: ', data.train_x_bc.shape)

plt.plot(data.train_x_all, np.zeros([data.train_x_all.shape[0], 1]), '.')
plt.xlabel('Position (x)')
plt.show()
Total training points:  (12, 1)
Training initial points:  (4, 1)

5.2.4. Define Network and Hyper-parameters

In [25]:
layer_size = [1] + [20] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)
In [26]:
model = dde.Model(data, net)
model.compile("adam", lr = 0.001, metrics = ["l2 relative error"])
Compiling model...
'compile' took 0.000994 s

5.2.5. Train & Prediction

In [27]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Training model...

WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7f4e23d57cb0> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x7f4e23d57cb0>: no matching AST found among candidates:
# coding=utf-8
(lambda x, on: np.array([on_boundary(x[i], on[i]) for i in range(len(x))]))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7f4e23d5d0e0> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x7f4e23d5d0e0>: no matching AST found among candidates:
# coding=utf-8
(lambda x, on: np.array([on_boundary(x[i], on[i]) for i in range(len(x))]))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
Step      Train loss                                            Test loss                                             Test metric   
0         [7.35e-01, 0.00e+00, 6.42e-03, 2.03e-03, 1.39e-03]    [7.03e-01, 0.00e+00, 6.42e-03, 2.03e-03, 1.39e-03]    [4.04e-01]    
1000      [7.85e-05, 1.39e-15, 1.57e-08, 2.16e-07, 1.41e-07]    [6.71e-05, 9.38e-15, 1.57e-08, 2.16e-07, 1.41e-07]    [7.95e-04]    
2000      [5.07e-05, 1.19e-08, 1.08e-08, 2.54e-07, 1.61e-08]    [6.51e-05, 1.19e-08, 1.08e-08, 2.54e-07, 1.60e-08]    [2.65e-03]    
3000      [3.64e-05, 1.76e-09, 9.19e-09, 4.49e-08, 1.18e-08]    [4.50e-05, 1.76e-09, 9.16e-09, 4.49e-08, 1.18e-08]    [2.41e-03]    
4000      [2.68e-05, 2.74e-10, 1.91e-09, 1.13e-08, 1.39e-08]    [3.44e-05, 2.77e-10, 1.91e-09, 1.13e-08, 1.39e-08]    [8.06e-04]    
5000      [2.04e-05, 4.56e-11, 3.92e-10, 5.56e-09, 2.94e-09]    [2.74e-05, 4.60e-11, 3.89e-10, 5.56e-09, 2.94e-09]    [8.72e-04]    
6000      [3.07e-04, 3.03e-06, 3.09e-06, 3.13e-07, 3.71e-05]    [3.66e-04, 3.03e-06, 3.09e-06, 3.13e-07, 3.71e-05]    [3.24e-02]    
7000      [1.58e-05, 3.41e-07, 1.06e-07, 1.15e-07, 1.97e-10]    [1.98e-05, 3.41e-07, 1.06e-07, 1.15e-07, 1.96e-10]    [6.80e-03]    
8000      [2.16e-05, 1.28e-07, 7.37e-08, 3.23e-08, 1.64e-06]    [3.07e-05, 1.28e-07, 7.36e-08, 3.23e-08, 1.64e-06]    [6.34e-03]    
9000      [1.35e-04, 1.20e-06, 1.30e-07, 1.62e-07, 1.76e-05]    [1.73e-04, 1.20e-06, 1.30e-07, 1.62e-07, 1.76e-05]    [1.77e-02]    
10000     [7.17e-06, 3.86e-10, 1.32e-10, 6.52e-10, 2.34e-09]    [9.61e-06, 3.89e-10, 1.30e-10, 6.49e-10, 2.34e-09]    [4.56e-04]    

Best model at step 10000:
  train loss: 7.17e-06
  test loss: 9.61e-06
  test metric: [4.56e-04]

'train' took 114.278234 s

6. Lab 4: Navier-Stokes Equations (Fluid Mechanics)

  • Note: we strongly recommend using a GPU rather than a CPU (you can use the Colab GPU for free)
  • If you use Google Colab, run the cells below
In [28]:
# !pip install deepxde
In [29]:
import tensorflow as tf
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt
In [30]:
from deepxde.backend.set_default_backend import set_default_backend
set_default_backend("tensorflow")
Setting the default backend to "tensorflow". You can change it in the ~/.deepxde/config.json file or export the DDEBACKEND environment variable. Valid options are: tensorflow.compat.v1, tensorflow, pytorch (all lowercase)

6.1. Problem setup

  • We will solve the 2D Navier-Stokes equations to find the velocity profile in flow between infinite parallel plates
    • Any fluid flowing between the plates has to enter at some location. The region of flow near where the fluid enters is termed the entrance region and is illustrated in the figure below
    • The fluid typically enters the plates with a nearly uniform velocity profile
    • As the fluid moves through the plates, viscous effects cause it to stick to the plate walls (no-slip boundary condition)
    • Thus, a boundary layer forms along the plate walls, and the initial velocity profile changes with distance along the plates, $x$, until the fluid reaches the end of the hydrodynamic entrance region (also called the entrance length), beyond which the velocity profile no longer varies with $x$





  • Problem properties


$$\rho = 1\operatorname{kg/m^3}, \quad \mu = 1\operatorname{N\cdot s/m^2}, \quad D = 1\operatorname{m}, \quad L = 2\operatorname{m}, \quad u_{in} = 1\operatorname{m/s}$$


  • The hydraulic diameter (with half-gap $h = D/2$) is


$$\quad D_h = {4A \over p} = \lim\limits_{b\to\infty} {4(2b)(2h) \over {2(2b)+2(2h)}} = 4h = 2\operatorname{m}$$

  • So, the Reynolds number of this system is $Re = \rho D_h u_{in} / \mu = 2$
  • 2D Navier-Stokes Equations & boundary conditions (for steady state)


$$ \begin{align*} u{\partial u \over \partial x} + v{\partial u \over \partial y} + {1 \over \rho}{\partial p \over \partial x} - \nu \ \left({\partial^2 u \over {\partial x^2}} + {\partial^2 u \over {\partial y^2}}\right) &= 0\\\\ u{\partial v \over \partial x} + v{\partial v \over \partial y} + {1 \over \rho}{\partial p \over \partial y} - \nu \ \left({\partial^2 v \over {\partial x^2}} + {\partial^2 v \over {\partial y^2}}\right) &= 0\\\\ {\partial u \over \partial x} + {\partial v \over \partial y} &= 0 \end{align*} $$


  • Two Dirichlet boundary conditions on the plate boundaries (no-slip condition):
$$u(x,y) = 0, \quad v(x,y) = 0 \qquad \text{at} \quad y = \frac{D}{2} \ \text{or} \ -\frac{D}{2}$$
  • Two Dirichlet boundary conditions at the inlet boundary:
$$u(-1,y) = u_{\text{in}}, \quad v(-1,y) = 0$$
  • Two Dirichlet boundary conditions at the outlet boundary:
$$p(1,y) = 0, \quad v(1,y) = 0$$
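The numbers above can be checked symbolically (a quick SymPy sanity check, writing the cross-section as width $2b$ by gap $2h$, so that $h = D/2$ is the half-gap, an assumption made for this check):

```python
import sympy as sp

b, h = sp.symbols('b h', positive=True)

# D_h = 4A/p for a 2b-by-2h rectangle tends to 4h as the width grows
Dh_expr = 4 * (2*b) * (2*h) / (2*(2*b) + 2*(2*h))
assert sp.limit(Dh_expr, b, sp.oo) == 4*h

# With half-gap h = D/2 = 0.5 m: D_h = 2 m, so Re = rho * D_h * u_in / mu = 2
rho, mu, u_in = 1, 1, 1
Dh = float((4*h).subs(h, 0.5))
assert Dh == 2.0
assert rho * Dh * u_in / mu == 2.0
```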

6.2. CFD Solution

  • The CFD solution of this problem is illustrated in the figures below
    • Velocity $u$, velocity $v$, and pressure $p$, respectively
  • We will solve this problem using a PINN and then compare with the CFD solution







6.3. PINN

6.3.1. Define Parameters

In [31]:
# Properties
rho = 1
mu = 1
u_in = 1
D = 1
Dh = 2
L = 2
Re = rho * Dh * u_in/mu

6.3.2. Define PDE with Boundary & Initial Conditions

In [32]:
def boundary_tube(x, on_boundary):
    _on_tube = np.logical_and(np.logical_or(np.isclose(x[1], -D/2), np.isclose(x[1], D/2)), on_boundary)
    return _on_tube

def boundary_flow(x, on_boundary):
    return on_boundary and np.isclose(x[0], -L/2)

def boundary_end(x, on_boundary):
    return on_boundary and np.isclose(x[0], L/2)
In [33]:
def pde(x, y):
    # Network outputs: y[:, 0] = u, y[:, 1] = v, y[:, 2] = p
    du_x = dde.grad.jacobian(y, x, i = 0, j = 0)
    du_y = dde.grad.jacobian(y, x, i = 0, j = 1)
    dv_x = dde.grad.jacobian(y, x, i = 1, j = 0)
    dv_y = dde.grad.jacobian(y, x, i = 1, j = 1)
    dp_x = dde.grad.jacobian(y, x, i = 2, j = 0)
    dp_y = dde.grad.jacobian(y, x, i = 2, j = 1)
    du_xx = dde.grad.hessian(y, x, i = 0, j = 0, component = 0)
    du_yy = dde.grad.hessian(y, x, i = 1, j = 1, component = 0)
    dv_xx = dde.grad.hessian(y, x, i = 0, j = 0, component = 1)
    dv_yy = dde.grad.hessian(y, x, i = 1, j = 1, component = 1)
    
    pde_u = y[:,0:1]*du_x + y[:,1:2]*du_y + 1/rho * dp_x - (mu/rho)*(du_xx + du_yy)
    pde_v = y[:,0:1]*dv_x + y[:,1:2]*dv_y + 1/rho * dp_y - (mu/rho)*(dv_xx + dv_yy)
    pde_cont = du_x + dv_y

    return [pde_u, pde_v, pde_cont]

6.3.3. Define Geometry, Implement Boundary Condition

In [34]:
geom = dde.geometry.Polygon([[-L/2, -D/2], [-L/2, D/2], [L/2, D/2], [L/2, 0], [L/2, -D/2]])
In [35]:
bc_tube_u = dde.DirichletBC(geom, lambda x: 0., boundary_tube, component = 0)
bc_tube_v = dde.DirichletBC(geom, lambda x: 0., boundary_tube, component = 1)

bc_flow_u = dde.DirichletBC(geom, lambda x: u_in, boundary_flow, component = 0)
bc_flow_v = dde.DirichletBC(geom, lambda x: 0., boundary_flow, component = 1)
bc_end_p = dde.DirichletBC(geom, lambda x: 0., boundary_end, component = 2)
bc_end_v = dde.DirichletBC(geom, lambda x: 0., boundary_end, component = 1)
In [36]:
data = dde.data.PDE(geom, 
                    pde, 
                    [bc_tube_u, bc_tube_v, bc_flow_u, bc_flow_v, bc_end_v, bc_end_p], 
                    num_domain = 2000, 
                    num_boundary = 200, 
                    num_test = 100)
/mnt/disk1/seungmin/env_378/lib/python3.7/site-packages/skopt/sampler/sobol.py:250: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+207=207. 
  total_n_samples))
Warning: Polygon.uniform_points not implemented. Use random_points instead.
In [37]:
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s=0.5)
plt.xlabel('x-direction length')
plt.ylabel('Distance from the middle of plates (m)')
plt.show()

6.3.4. Define Network and Hyper-parameters

In [38]:
layer_size = [2] + [64] * 5 + [3]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)
In [39]:
model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000589 s

6.3.5. Train (Adam optimizer)

In [40]:
losshistory, train_state = model.train(epochs = 10000, display_every = 1000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Training model...

Step      Train loss                                                                                    Test loss                                                                                     Test metric
0         [1.70e-02, 1.42e-02, 1.32e-02, 1.93e-02, 7.75e-03, 8.91e-01, 2.68e-03, 2.68e-03, 4.86e-03]    [1.69e-02, 1.44e-02, 1.34e-02, 1.93e-02, 7.75e-03, 8.91e-01, 2.68e-03, 2.68e-03, 4.86e-03]    []  
1000      [2.04e-03, 1.01e-03, 9.05e-03, 2.97e-02, 1.60e-02, 3.68e-02, 1.68e-03, 1.64e-05, 1.59e-03]    [1.87e-03, 9.42e-04, 1.23e-02, 2.97e-02, 1.60e-02, 3.68e-02, 1.68e-03, 1.64e-05, 1.59e-03]    []  
2000      [9.47e-04, 7.77e-04, 5.60e-03, 2.33e-02, 1.27e-02, 2.68e-02, 1.17e-03, 5.39e-05, 1.05e-04]    [1.34e-03, 7.81e-04, 8.72e-03, 2.33e-02, 1.27e-02, 2.68e-02, 1.17e-03, 5.39e-05, 1.05e-04]    []  
3000      [9.07e-03, 6.53e-03, 3.50e-03, 2.06e-02, 1.06e-02, 2.07e-02, 3.05e-03, 1.14e-04, 3.94e-03]    [8.30e-03, 7.14e-03, 6.09e-03, 2.06e-02, 1.06e-02, 2.07e-02, 3.05e-03, 1.14e-04, 3.94e-03]    []  
4000      [1.44e-03, 6.14e-04, 3.48e-03, 1.78e-02, 9.65e-03, 1.63e-02, 4.73e-03, 5.59e-06, 1.37e-05]    [2.23e-03, 7.00e-04, 5.65e-03, 1.78e-02, 9.65e-03, 1.63e-02, 4.73e-03, 5.59e-06, 1.37e-05]    []  
5000      [3.75e-04, 4.38e-04, 2.88e-03, 1.66e-02, 8.75e-03, 1.46e-02, 5.72e-03, 1.15e-05, 6.39e-05]    [4.96e-04, 4.35e-04, 4.91e-03, 1.66e-02, 8.75e-03, 1.46e-02, 5.72e-03, 1.15e-05, 6.39e-05]    []  
6000      [6.21e-03, 1.85e-03, 2.81e-03, 1.58e-02, 7.51e-03, 1.50e-02, 5.71e-03, 1.70e-05, 2.45e-03]    [5.89e-03, 1.60e-03, 4.84e-03, 1.58e-02, 7.51e-03, 1.50e-02, 5.71e-03, 1.70e-05, 2.45e-03]    []  
7000      [2.50e-04, 4.85e-04, 2.68e-03, 1.53e-02, 7.42e-03, 1.31e-02, 6.41e-03, 9.43e-06, 4.91e-06]    [4.45e-04, 4.63e-04, 4.38e-03, 1.53e-02, 7.42e-03, 1.31e-02, 6.41e-03, 9.43e-06, 4.91e-06]    []  
8000      [2.60e-03, 1.06e-03, 2.65e-03, 1.54e-02, 6.90e-03, 1.23e-02, 6.41e-03, 3.78e-05, 1.41e-04]    [2.59e-03, 1.06e-03, 4.20e-03, 1.54e-02, 6.90e-03, 1.23e-02, 6.41e-03, 3.78e-05, 1.41e-04]    []  
9000      [8.73e-03, 1.59e-03, 2.47e-03, 1.51e-02, 6.62e-03, 1.21e-02, 6.55e-03, 2.35e-05, 5.02e-05]    [1.21e-02, 1.91e-03, 3.98e-03, 1.51e-02, 6.62e-03, 1.21e-02, 6.55e-03, 2.35e-05, 5.02e-05]    []  
10000     [4.10e-03, 3.49e-04, 2.62e-03, 1.51e-02, 5.91e-03, 1.22e-02, 6.22e-03, 9.71e-06, 1.63e-03]    [4.32e-03, 3.70e-04, 4.02e-03, 1.51e-02, 5.91e-03, 1.22e-02, 6.22e-03, 9.71e-06, 1.63e-03]    []  

Best model at step 7000:
  train loss: 4.56e-02
  test loss: 4.75e-02
  test metric: []

'train' took 113.467573 s

6.3.6. Plot Results (Adam optimizer)

In [41]:
samples = geom.random_points(500000)
In [42]:
result = model.net(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]
for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                cmap = 'jet',
                s = 2)
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
    plt.tight_layout()
    plt.show()

6.3.7. Train More (L-BFGS optimizer)

In [43]:
dde.optimizers.config.set_LBFGS_options(maxiter = 10000)
In [44]:
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.090580 s

Training model...

Step      Train loss                                                                                    Test loss                                                                                     Test metric
10000     [4.10e-03, 3.49e-04, 2.62e-03, 1.51e-02, 5.91e-03, 1.22e-02, 6.22e-03, 9.71e-06, 1.63e-03]    [4.32e-03, 3.70e-04, 4.02e-03, 1.51e-02, 5.91e-03, 1.22e-02, 6.22e-03, 9.71e-06, 1.63e-03]    []  
13903     [4.46e-04, 2.69e-04, 4.48e-04, 1.09e-03, 9.70e-04, 4.30e-04, 1.10e-03, 1.07e-05, 9.54e-06]    [1.41e+01, 2.01e+01, 4.07e-04, 1.09e-03, 9.70e-04, 4.30e-04, 1.10e-03, 1.07e-05, 9.54e-06]    []  

Best model at step 13903:
  train loss: 4.77e-03
  test loss: 3.42e+01
  test metric: []

'train' took 836.209761 s

6.3.8. Plot Results (Adam + L-BFGS)

In [45]:
samples = geom.random_points(500000)
In [46]:
result = model.net(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]
for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                cmap = 'jet',
                s = 2)
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
    plt.tight_layout()
    plt.show()

6.3.9. Validation: Plot Velocity Profile at the End of the Plate

  • The fully developed velocity profile for flow between infinite parallel plates is known to be


$$u(y) = {3V_{avg} \over 2} \left[ 1- \left( \frac{y}{h} \right)^2 \right]$$
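As a sanity check, averaging this parabolic profile over the gap recovers $V_{avg}$ (a quick SymPy verification, with $h$ denoting the half-gap):

```python
import sympy as sp

y, h, V = sp.symbols('y h V', positive=True)
u = sp.Rational(3, 2) * V * (1 - (y/h)**2)

# Average of u over the gap [-h, h] equals V_avg
avg = sp.integrate(u, (y, -h, h)) / (2*h)
assert sp.simplify(avg - V) == 0
```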




In [47]:
# Analytical solution
x = np.ones([1000,1])
y = np.linspace(-0.5, 0.5, 1000).reshape(1000,1)
end = np.hstack([x, y])

gt_flow = u_in * 1.5 * (1 - ((y)/(D/2))**2)

result = model.net(end)
In [48]:
# CFD solution
import pandas as pd
vel_plot = pd.read_csv('./data_files/vel_plot_x1.txt',sep = '\t', header = None)
In [49]:
for idx in range(1):
    plt.figure(figsize = (9, 6))
    plt.plot(y, result[:, idx], c = 'tomato', linewidth = 3, label = 'PINN solution')
    plt.plot(y, gt_flow, c = 'k', linestyle = '--', label = 'Analytical solution')
    plt.plot(np.linspace(-0.5,0.5,502), vel_plot[1],  c = 'green', linewidth = 3, label = 'CFD')
    plt.xticks(np.linspace(-0.5, 0.5, 5), fontsize = 15)
    plt.yticks(np.linspace(0, 1.5, 11), fontsize = 15)
    plt.tight_layout()
    plt.legend(fontsize = 12)
    plt.xlabel('Distance from the middle of plates (m)', fontsize = 15)
    plt.ylabel('Velocity ($u$)', fontsize = 15)
    plt.show()